Constrained Optimization for Neural Map Formation
Authors
Abstract
Computational models of neural map formation can be considered on at least three different levels of abstraction: detailed models including neural activity dynamics, weight dynamics that abstract from the neural activity dynamics by an adiabatic approximation, and constrained optimization from which equations governing weight dynamics can be derived. Constrained optimization uses an objective function, from which a weight growth rule can be derived as a gradient flow, and some constraints, from which normalization rules are derived. In this paper we present an example of how an optimization problem can be derived from detailed non-linear neural dynamics. A systematic investigation reveals how different weight dynamics introduced previously can be derived from two types of objective function terms and two types of constraints. This includes dynamic link matching as a special case of neural map formation. We focus in particular on the role of coordinate transformations to derive different weight dynamics from the same optimization problem. Several examples illustrate how the constrained optimization framework can help in understanding, generating, and comparing different models of neural map formation. The techniques used in this analysis may also be useful in investigating other types of neural dynamics.
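To make the framework described above concrete, the following minimal sketch illustrates a weight growth rule obtained as the gradient flow of an objective function, with a constraint enforced by normalization. The specific choices here (a correlation matrix C, the linear objective H(W) = Σ_rs W_rs C_rs, and the row-sum constraint Σ_s W_rs = 1) are illustrative assumptions for the example, not the particular model analyzed in the paper.

```python
import numpy as np

# Minimal sketch of the constrained-optimization view of weight dynamics:
# grow weights along the gradient of an objective and enforce a constraint
# by normalization.  All specific choices (C, H, the row-sum constraint)
# are illustrative assumptions.

rng = np.random.default_rng(0)
n_out, n_in = 4, 6
C = rng.random((n_out, n_in))            # assumed input-output correlations
W = np.full((n_out, n_in), 1.0 / n_in)   # start on the constraint surface sum_s W_rs = 1

def growth(W, C):
    # Gradient flow of the linear objective H(W) = sum_{rs} W_rs * C_rs.
    return C

def subtractive_normalization(dW):
    # Project the growth term onto the constraint surface sum_s W_rs = const
    # by removing the mean growth of each row.
    return dW - dW.mean(axis=1, keepdims=True)

dt = 0.01
for _ in range(1000):
    W += dt * subtractive_normalization(growth(W, C))
    W = np.clip(W, 0.0, None)                 # keep weights non-negative
    W /= W.sum(axis=1, keepdims=True)         # multiplicative renormalization

print(np.round(W, 3))
print(W.sum(axis=1))                          # each row still sums to 1
```

In this sketch the subtractive step projects the gradient onto the constraint surface, while the multiplicative step rescales the weights after clipping; these correspond to the two kinds of normalization rules that the constrained optimization framework derives from constraints.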
Similar resources
Constrained Optimization for Neural Map Formation: A Unifying Framework for Weight Growth and Normalization
An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems
Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
Multi-objective optimization of geometrical parameters for constrained groove pressing of aluminium sheet using a neural network and the genetic algorithm
One sheet severe plastic deformation (SPD) operation, namely constrained groove pressing (CGP), is investigated here in order to determine the optimum values of the geometrical variables of this process for pure aluminium sheets. In this regard, two different objective functions, i.e. the uniformity of the effective strain distribution and the force required per unit weight of the specimen, are...
PROJECTED DYNAMICAL SYSTEMS AND OPTIMIZATION PROBLEMS
We establish a relationship between general constrained pseudoconvex optimization problems and globally projected dynamical systems. A corresponding novel neural network model, which is globally convergent and stable in the sense of Lyapunov, is proposed. Both theoretical and numerical approaches are considered. Numerical simulations for three constrained nonlinear optimization problems a...
Inversion of Gravity Data by Constrained Nonlinear Optimization based on nonlinear Programming Techniques for Mapping Bedrock Topography
A constrained nonlinear optimization method based on nonlinear programming techniques has been applied to map the bedrock geometry of sedimentary basins by inversion of gravity anomaly data. In the inversion, the model used is a 2-D model composed of a set of juxtaposed prisms whose lower depths are treated as unknown model parameters. The inversion method applied is a nonli...